
    Proprioceptive Inference for Dual-Arm Grasping of Bulky Objects Using RoboSimian

    This work demonstrates dual-arm lifting of bulky objects based on inferred object properties (center-of-mass (COM) location, weight, and shape) using proprioception (i.e., force-torque measurements). Data-driven Bayesian models describe these quantities, which enables subsequent behaviors to depend on the confidence of the learned models. Experiments were conducted using the NASA Jet Propulsion Laboratory's (JPL) RoboSimian to lift a variety of cumbersome objects ranging in mass from 7 kg to 25 kg. The position of a supporting second manipulator was determined using a particle set and heuristics derived from the inferred object properties. For each bulky object, the supporting manipulator decreased the initial manipulator's load and distributed the wrench load more equitably across the two manipulators. Knowledge of the objects came from pure proprioception (i.e., without reliance on vision or other exteroceptive sensors) throughout the experiments.
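
    The estimation described above can be illustrated with a small example. Below is a minimal sketch (not the paper's Bayesian implementation) of recovering an object's mass and COM location from wrist force-torque measurements taken at several static grasp orientations, using ordinary least squares on the static wrench model τ = r_com × f; the function names are illustrative.

    import numpy as np

    G = 9.81  # gravitational acceleration [m/s^2]

    def skew(v):
        """Return the matrix S such that S @ u == np.cross(v, u)."""
        return np.array([[0.0, -v[2], v[1]],
                         [v[2], 0.0, -v[0]],
                         [-v[1], v[0], 0.0]])

    def estimate_mass_and_com(forces, torques):
        """forces, torques: (N, 3) gravity-induced wrench samples in the
        force-torque sensor frame, taken at N distinct grasp orientations.
        The static model tau_i = r_com x f_i is linear in r_com:
            tau_i = -skew(f_i) @ r_com
        so r_com follows from stacking all samples and solving least squares."""
        forces = np.asarray(forces, dtype=float)
        torques = np.asarray(torques, dtype=float)
        mass = float(np.mean(np.linalg.norm(forces, axis=1)) / G)
        A = np.vstack([-skew(f) for f in forces])      # (3N, 3)
        b = torques.reshape(-1)                        # (3N,)
        r_com, *_ = np.linalg.lstsq(A, b, rcond=None)  # COM in sensor frame [m]
        return mass, r_com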

    Supervised Remote Robot with Guided Autonomy and Teleoperation (SURROGATE): A Framework for Whole-Body Manipulation

    The use of human cognitive capabilities to help guide the autonomy of robotic platforms, typically called “supervised autonomy,” is becoming more commonplace in robotics research. The work discussed in this paper presents an approach to a human-in-the-loop mode of robot operation that integrates high-level human cognition and commanding with the intelligence and processing power of autonomous systems. Our framework for a “Supervised Remote Robot with Guided Autonomy and Teleoperation” (SURROGATE) is demonstrated on a robotic platform consisting of a pan-tilt perception head and two 7-DOF arms connected by a single 7-DOF torso, mounted on a tracked-wheel base. We present an architecture that allows high-level supervisory commands and intents to be specified by a user and then interpreted by the robotic system to perform whole-body manipulation tasks autonomously. We use the concept of “behaviors” to chain together sequences of “actions” for the robot to perform, which are then executed in real time.
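
    As a rough illustration of the behavior/action chaining described above, the sketch below models a behavior as an ordered sequence of actions executed by a supervisory loop that aborts on the first failure; the class and method names are illustrative, not the SURROGATE API.

    from dataclasses import dataclass, field
    from typing import Callable, List

    @dataclass
    class Action:
        name: str
        execute: Callable[[], bool]   # returns True on success

    @dataclass
    class Behavior:
        """A named sequence of actions executed in order; aborts on first failure."""
        name: str
        actions: List[Action] = field(default_factory=list)

        def run(self) -> bool:
            for action in self.actions:
                print(f"[{self.name}] executing {action.name}")
                if not action.execute():
                    print(f"[{self.name}] aborted at {action.name}")
                    return False
            return True

    # Example: a whole-body "open door" behavior chained from primitive actions.
    open_door = Behavior("open_door", [
        Action("approach_door", lambda: True),
        Action("grasp_handle", lambda: True),
        Action("turn_handle", lambda: True),
        Action("push_door", lambda: True),
    ])
    open_door.run()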

    An Architecture for Online Affordance-based Perception and Whole-body Planning

    The DARPA Robotics Challenge Trials held in December 2013 provided a landmark demonstration of dexterous mobile robots executing a variety of tasks aided by a remote human operator using only data from the robot's sensor suite transmitted over a constrained, field-realistic communications link. We describe the design considerations, architecture, implementation, and performance of the software that Team MIT developed to command and control an Atlas humanoid robot. Our design emphasized human interaction with an efficient motion planner, where operators expressed desired robot actions in terms of affordances fit using perception and manipulated in a custom user interface. We highlight several important lessons learned while developing our system on a highly compressed schedule.
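
    As an illustration of the affordance idea (a parameterized object template fit from perception data and queried by the planner for contact or grasp targets), here is a minimal sketch for a valve-like affordance; the representation and names are assumptions for illustration, not Team MIT's implementation.

    from dataclasses import dataclass
    import numpy as np

    @dataclass
    class ValveAffordance:
        pose: np.ndarray      # 4x4 homogeneous transform, world frame
        radius: float         # valve wheel radius [m]

        def grasp_targets(self, n: int = 8) -> np.ndarray:
            """Candidate grasp points sampled around the valve rim (world frame)."""
            angles = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
            rim = np.stack([self.radius * np.cos(angles),
                            self.radius * np.sin(angles),
                            np.zeros(n),
                            np.ones(n)], axis=0)          # (4, n) homogeneous
            return (self.pose @ rim)[:3].T                # (n, 3) points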

    Mobility Erosion: High speed motion safety for mobile robots operating in off-road terrain

    This paper addresses the problem of ensuring mobile robot motion safety when reacting to soft and hard hazards in a static environment. The work is aimed at off-road navigation for mobile ground robots, where soft hazards are posed by varying terrain conditions (e.g., deformable soil, slopes, vegetation). Soft hazards impose operating constraints (i.e., speed limits) on the mobile robot that must be satisfied to ensure motion safety. This paper presents a new morphological erosion operator that generalizes binary obstacle growing to mobility space (the space of speed limits) to handle both hard and soft hazards seamlessly. This ensures that topological constraints due to vehicle size as well as momentum are taken into account, and leads to a straightforward generalization of the concept of “regions of inevitable collision” to soft hazards.
    U.S. Army Research Laboratory; U.S. Army Research Office (Contract/Grant W911NF-11-C-0101).
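
    The erosion idea can be sketched concretely: treat the map as a grid of per-cell speed limits (0 for hard obstacles) and apply grayscale morphological erosion with the vehicle footprint, so each cell's limit becomes the minimum over the area the vehicle would occupy there. The minimal sketch below assumes SciPy and omits the paper's momentum/“inevitable collision” extension; it reduces to binary obstacle growing when the map contains only 0 and the nominal speed.

    import numpy as np
    from scipy.ndimage import grey_erosion

    def disk_footprint(radius_cells: int) -> np.ndarray:
        """Boolean disk structuring element approximating the vehicle footprint."""
        r = radius_cells
        y, x = np.ogrid[-r:r + 1, -r:r + 1]
        return (x * x + y * y) <= r * r

    def erode_speed_limits(speed_map: np.ndarray, radius_cells: int) -> np.ndarray:
        """Grayscale-erode the speed-limit map with the vehicle footprint."""
        return grey_erosion(speed_map, footprint=disk_footprint(radius_cells))

    # Example: a hard obstacle (0 m/s) and a soft hazard (1 m/s) both shrink the
    # surrounding admissible speeds.
    speed_map = np.full((50, 50), 3.0)   # nominal limit 3 m/s
    speed_map[20:25, 20:25] = 0.0        # hard obstacle
    speed_map[35:40, 10:15] = 1.0        # soft hazard (e.g. deformable soil)
    safe_map = erode_speed_limits(speed_map, radius_cells=3)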

    Reactive control in environments with hard and soft hazards

    In this paper we present a generalization of reactive obstacle avoidance algorithms for mobile robots operating among soft hazards such as off-road slopes and deformable terrain. A new hazard avoidance scheme generalizes constraint-based reactive algorithms [1], [2] from hard to soft hazards. Reactive controllers operate by directly parameterizing the closed-loop dynamics of the system with respect to the environment the robot is operating in. Traditionally, reactive controllers are parameterized by weighting virtual attraction and repulsion forces from goals and obstacles [3], [4]. One pitfall of such parameterizations is the sensitivity of the tuning parameters to the operating environment: a reactive controller tuned for one set of conditions is not applicable in another (e.g., a different density of obstacles). The algorithm presented in this paper has two significant properties: i) its parameterization is environment-independent, and ii) it can handle non-binary environments that contain soft hazards.
    U.S. Army Research Laboratory; U.S. Army Research Office (Contract/Grant W911NF-11-C-0101).
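
    To make the constraint-based flavor concrete, the sketch below (illustrative only, not the paper's algorithm) treats each hazard as a speed constraint that tightens with proximity and picks the goal-directed velocity satisfying all constraints, rather than summing attraction and repulsion forces.

    import numpy as np

    def hazard_speed_limit(dist_to_hazard: float, hazard_limit: float,
                           v_max: float, influence: float = 2.0) -> float:
        """Speed constraint from one hazard: its own limit up close, relaxing
        linearly back to v_max outside the influence radius [m]."""
        if dist_to_hazard >= influence:
            return v_max
        alpha = dist_to_hazard / influence
        return hazard_limit + alpha * (v_max - hazard_limit)

    def reactive_velocity(pos, goal, hazards, v_max=2.0):
        """hazards: list of (position, speed_limit) pairs; returns a velocity
        command pointing at the goal, scaled to satisfy every constraint."""
        pos, goal = np.asarray(pos, float), np.asarray(goal, float)
        to_goal = goal - pos
        dist = np.linalg.norm(to_goal)
        if dist < 1e-6:
            return np.zeros(2)
        limits = [hazard_speed_limit(np.linalg.norm(pos - np.asarray(h)), lim, v_max)
                  for h, lim in hazards]
        speed = min([v_max] + limits)
        return (to_goal / dist) * speed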

    Team RoboSimian: Semi-autonomous Mobile Manipulation at the 2015 DARPA Robotics Challenge Finals

    This paper discusses hardware and software improvements to the RoboSimian system leading up to and during the 2015 DARPA Robotics Challenge (DRC) Finals. Team RoboSimian placed 5th, scoring 7 points in 47:59 min. We present an architecture that was structured to be adaptable at the lowest level and repeatable at the highest level. The low-level adaptability was achieved by leveraging tactile measurements from force-torque sensors in the wrist coupled with whole-body motion primitives. We use the term “behaviors” to conceptualize this low-level adaptability. Each behavior is a contact-triggered state machine that enables short-order manipulation and mobility tasks to be executed autonomously. At a high level, we focused on a teach-and-repeat style of development, storing executed behaviors and navigation poses in an object/task frame for later recall. This enabled us to perform tasks with high repeatability on competition day while remaining robust to differences between practice and execution.
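
    A contact-triggered behavior of the kind described above can be sketched as a small state machine that advances motion primitives and switches state when the wrist force-torque reading crosses a contact threshold; the names and thresholds below are illustrative, not the RoboSimian software.

    from enum import Enum, auto

    class State(Enum):
        APPROACH = auto()
        CONTACT = auto()
        EXECUTE = auto()
        DONE = auto()

    def contact_triggered_behavior(read_wrist_force, step_primitive,
                                   contact_threshold=15.0, max_steps=1000):
        """Run APPROACH until measured force exceeds the threshold (contact),
        then execute the manipulation primitive until it reports completion."""
        state = State.APPROACH
        for _ in range(max_steps):
            if state is State.APPROACH:
                step_primitive("approach")
                if read_wrist_force() > contact_threshold:   # contact detected
                    state = State.CONTACT
            elif state is State.CONTACT:
                step_primitive("settle_grasp")
                state = State.EXECUTE
            elif state is State.EXECUTE:
                if step_primitive("manipulate"):             # True when finished
                    state = State.DONE
            else:
                return True
        return False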